Advances in animal motion tracking and pose recognition have been a game changer for animal behavior research. Recently, a growing body of work has gone "deeper" than tracking and addresses the automated recognition of animals' internal states, such as emotions and pain, with the aim of improving animal welfare, making this a timely moment to systematize the field. This paper provides a comprehensive survey of computer-vision-based research on the recognition of affective states and pain in animals, covering both facial and bodily behavior analysis. We summarize the efforts made on this topic to date, categorize them along several dimensions, highlight challenges and research gaps, and provide best-practice recommendations for advancing the field, as well as some directions for future research.
Most of today's action recognition models are heavily parameterized and evaluated on datasets whose classes differ mainly in spatial appearance. Previous results on single images indicate that 2D convolutional neural networks (CNNs) tend to be biased toward texture rather than shape across a range of computer vision tasks (Geirhos et al., 2019), reducing generalization. Taken together, this raises the suspicion that large video models learn spurious correlations rather than tracking the relevant shapes over time and inferring generalizable semantics from their motion. A natural way to model visual patterns over time is to use recurrence along the temporal axis. In this paper, we empirically study the cross-domain robustness of recurrent, attention-based, and convolutional video models, to investigate whether this robustness is affected by how frame dependencies are modeled. Our novel Temporal Shape dataset is proposed as a lightweight dataset for evaluating the ability to generalize across temporal shapes that are not revealed by single frames. We find that, when controlling for performance and layer structure, recurrent models show better domain generalization on the Temporal Shape dataset than convolution- and attention-based models. Moreover, our experiments show that convolution- and attention-based models exhibit more texture bias than recurrent models on Diving48.
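As an illustration of the Temporal Shape idea described above, the following toy sketch renders a class as the trajectory a single moving dot traces over time, so that no individual frame reveals the label. The shapes, resolution, and frame count are illustrative assumptions, not the authors' dataset.

```python
# Hedged toy sketch: a "temporal shape" is the path a dot traces across frames,
# so the class ("circle" vs. "line") is only recoverable from motion, not from
# any single frame.
import numpy as np

def dot_sequence(shape: str, frames: int = 16, size: int = 32) -> np.ndarray:
    video = np.zeros((frames, size, size), dtype=np.float32)
    t = np.linspace(0, 2 * np.pi, frames)
    if shape == "circle":
        xs = (size / 2 + 10 * np.cos(t)).astype(int)
        ys = (size / 2 + 10 * np.sin(t)).astype(int)
    else:  # "line"
        xs = np.linspace(4, size - 5, frames).astype(int)
        ys = np.full(frames, size // 2, dtype=int)
    for f in range(frames):
        video[f, ys[f], xs[f]] = 1.0   # each frame contains only a single dot
    return video

clip = dot_sequence("circle")   # shape (16, 32, 32); class defined by the trajectory
```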
Orthopedic disorders are common in horses and often lead to euthanasia, which could frequently be avoided through earlier detection. These conditions typically cause varying degrees of subtle, long-term pain. Training visual pain recognition methods on video data depicting such pain is challenging, because the resulting pain behavior is also subtle, sparsely occurring, and varied, making it difficult even for expert labelers to provide accurate ground truth for the data. We show that a model trained solely on horses with acute experimental pain (where the labels are less ambiguous) can help recognize the more subtle displays of orthopedic pain. Moreover, we present a human expert baseline for the problem, as well as an extensive empirical study of various domain transfer methods and of what is detected in the orthopedic dataset by pain recognition methods trained on clean experimental pain. Finally, this is accompanied by a discussion of the challenges posed by real-world animal behavior datasets and of how best practices can be established for similar fine-grained action recognition tasks. Our code is available at https://github.com/sofiabroome/painface-recognition.
It is indisputable that physical activity is vital for an individual's health and wellness. However, the global prevalence of physical inactivity has significant personal and socioeconomic consequences. In recent years, a significant amount of work has showcased the capabilities of self-tracking technology to create positive health behavior change. This work is motivated by the potential of personalized and adaptive goal-setting techniques in encouraging physical activity via self-tracking. To this end, we propose UBIWEAR, an end-to-end framework for intelligent physical activity prediction, with the ultimate goal of empowering data-driven goal-setting interventions. To achieve this, we experiment with numerous machine learning and deep learning paradigms as a robust benchmark for physical activity prediction tasks. To train our models, we utilize "MyHeart Counts", an open, large-scale dataset collected in-the-wild from thousands of users. We also propose a prescriptive framework for preprocessing self-tracking aggregated data, to facilitate data wrangling of real-world, noisy data. Our best model achieves a MAE of 1087 steps, 65% lower than the state of the art in terms of absolute error, proving the feasibility of the physical activity prediction task and paving the way for future research.
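The abstract frames step-count prediction as a supervised regression problem evaluated with MAE. The sketch below illustrates that setup on synthetic data with a generic gradient-boosting regressor; the features and data are placeholders, not the UBIWEAR pipeline or the actual "MyHeart Counts" preprocessing.

```python
# Hedged sketch of a step-count regression baseline; synthetic stand-in data only.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Hypothetical aggregated features: past-week mean/std of daily steps, day of week.
past_mean = rng.normal(7000, 2500, n).clip(0)
past_std = rng.normal(1500, 500, n).clip(0)
day_of_week = rng.integers(0, 7, n)
X = np.column_stack([past_mean, past_std, day_of_week])
# Assume next-day step count loosely follows recent behavior plus noise.
y = past_mean + rng.normal(0, 1200, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print("MAE (steps):", mean_absolute_error(y_te, model.predict(X_te)))
```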
Building a quantum analog of classical deep neural networks represents a fundamental challenge in quantum computing. A key issue is how to address the inherent non-linearity of classical deep learning, which is problematic in the quantum domain because the composition of an arbitrary number of quantum gates, consisting of a series of sequential unitary transformations, is intrinsically linear. This problem has been variously approached in the literature, principally via the introduction of measurements between layers of unitary transformations. In this paper, we introduce the Quantum Path Kernel, a formulation of quantum machine learning capable of replicating those aspects of deep machine learning typically associated with superior generalization performance in the classical domain, specifically, hierarchical feature learning. Our approach generalizes the notion of the Quantum Neural Tangent Kernel, which has been used to study the dynamics of classical and quantum machine learning models. The Quantum Path Kernel exploits the parameter trajectory, i.e. the curve delineated by model parameters as they evolve during training, enabling the representation of differential layer-wise convergence behaviors, or the formation of hierarchical parametric dependencies, in terms of their manifestation in the gradient space of the predictor function. We evaluate our approach with respect to variants of the classification of Gaussian XOR mixtures - an artificial but emblematic problem that intrinsically requires multilevel learning in order to achieve optimal class separation.
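The Gaussian XOR mixture mentioned above is easy to reproduce and makes the "multilevel learning" requirement concrete: a linear classifier fails on it while a non-linear one succeeds. The sketch below generates such a mixture; the cluster positions and noise level are illustrative assumptions, and the Quantum Path Kernel itself is not reproduced here.

```python
# Hedged sketch of a Gaussian XOR mixture: opposite corners share a class.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

def gaussian_xor(n_per_cluster=200, sigma=0.3, seed=0):
    rng = np.random.default_rng(seed)
    centers = np.array([[1, 1], [-1, -1], [1, -1], [-1, 1]], dtype=float)
    labels = np.array([0, 0, 1, 1])  # XOR pattern: diagonal corners share a label
    X, y = [], []
    for c, lab in zip(centers, labels):
        X.append(c + sigma * rng.standard_normal((n_per_cluster, 2)))
        y.append(np.full(n_per_cluster, lab))
    return np.vstack(X), np.concatenate(y)

X, y = gaussian_xor()
# A linear separator is stuck near chance; a non-linear model separates the classes,
# which is why this problem is used to probe hierarchical (multilevel) learning.
print("linear acc:", LogisticRegression(max_iter=1000).fit(X, y).score(X, y))  # ~0.5
print("rbf-kernel acc:", SVC(kernel="rbf").fit(X, y).score(X, y))              # ~1.0
```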
The task of automatic text summarization produces a concise and fluent text summary while preserving key information and overall meaning. Document-level summarization has seen significant improvements in recent years through models based on the Transformer architecture. However, the quadratic memory and time complexities with respect to the sequence length make them very expensive to use, especially with the long sequences required by document-level summarization. Our work addresses the problem of document-level summarization by studying how efficient Transformer techniques can be used to improve the automatic summarization of very long texts. In particular, we use the arXiv dataset, consisting of scientific papers and their corresponding abstracts, as the baseline for this work. We then propose a novel retrieval-enhanced approach based on this architecture, which reduces the cost of generating a summary of the entire document by processing smaller chunks. The results fall below the baselines but suggest more efficient memory consumption and improved truthfulness.
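A minimal sketch of the chunk-based strategy described above: the long document is split into overlapping pieces so that no single forward pass sees the full quadratic-cost sequence, and the partial summaries are then fused. The chunk size, overlap, and placeholder summarizer are assumptions, not the paper's retrieval-enhanced model.

```python
# Hedged sketch of hierarchical, chunked summarization; `summarize` is a stand-in
# for any abstractive model.
from typing import Callable, List

def chunk_text(text: str, chunk_words: int = 512, overlap: int = 64) -> List[str]:
    words = text.split()
    chunks, start = [], 0
    while start < len(words):
        chunks.append(" ".join(words[start:start + chunk_words]))
        start += chunk_words - overlap
    return chunks

def hierarchical_summary(text: str, summarize: Callable[[str], str]) -> str:
    partial = [summarize(c) for c in chunk_text(text)]   # summarize each chunk
    return summarize(" ".join(partial))                   # fuse the partial summaries

# Example with a trivial stand-in summarizer (keeps the first sentence of its input).
demo = hierarchical_summary("Sentence one. Sentence two. " * 500,
                            summarize=lambda t: t.split(".")[0] + ".")
print(demo)
```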
Image generation and image completion are rapidly evolving fields, thanks to machine learning algorithms that are able to realistically replace missing pixels. However, generating large, high-resolution images with a high level of detail presents significant computational challenges. In this work, we formulate the image generation task as completion of an image where one out of four corners is missing. We then extend this approach to iteratively build larger images with the same level of detail. Our goal is to obtain a scalable methodology to generate the high-resolution samples typically found in satellite imagery datasets. We introduce a conditional progressive Generative Adversarial Network (GAN) that generates the missing tile in an image, using as input three initial adjacent tiles encoded in a latent vector by a Wasserstein auto-encoder. We focus on a set of images used by the United Nations Satellite Centre (UNOSAT) to train flood detection tools, and validate the quality of synthetic images in a realistic setup.
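The "three tiles in, one tile out" completion setup can be illustrated with a few lines of array slicing; the tile size below is an assumption, and the Wasserstein auto-encoder and progressive GAN stages are not reproduced.

```python
# Hedged sketch of preparing a (conditioning, target) pair for corner completion.
import numpy as np

def make_completion_pair(image: np.ndarray, tile: int):
    """Split a (2*tile, 2*tile, C) image into four corner tiles; return the three
    known tiles (conditioning input) and the missing bottom-right tile (target)."""
    tl = image[:tile, :tile]
    tr = image[:tile, tile:]
    bl = image[tile:, :tile]
    br = image[tile:, tile:]              # the tile the generator must synthesize
    condition = np.stack([tl, tr, bl], axis=0)
    return condition, br

img = np.random.rand(256, 256, 3)         # stand-in for a satellite image patch
cond, target = make_completion_pair(img, tile=128)
print(cond.shape, target.shape)           # (3, 128, 128, 3) (128, 128, 3)
```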
This work considers the path planning problem for a team of identical robots evolving in a known environment. The robots should satisfy a global specification given as a Linear Temporal Logic (LTL) formula over a set of regions of interest. The proposed method exploits the advantages of Petri net models for the team of robots and of Büchi automata modeling the specification. The approach consists in combining the two models into one, denoted the Composed Petri net, and using it to find a sequence of movements for the mobile robots that provides collision-free trajectories fulfilling the specification. The solution results from a set of Mixed Integer Linear Programming (MILP) problems. The main advantage of the proposed solution is the completeness of the algorithm, meaning that a solution is found whenever one exists; this is the key difference from our previous work in [1]. The simulations illustrate comparisons between the current and previous approaches, focusing on computational complexity.
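To give a flavor of the MILP machinery the approach relies on, here is a toy binary assignment problem (robots to regions, at most one robot per region as a crude stand-in for collision-freedom) written with the PuLP package. This is only an illustrative MILP under those assumptions, not the paper's Composed Petri net encoding, and the choice of PuLP/CBC as solver interface is an assumption.

```python
# Hedged toy MILP: assign 3 robots to 3 regions, minimizing total travel cost.
import pulp

cost = [[2, 5, 4],
        [3, 1, 6],
        [4, 4, 2]]
robots, regions = range(3), range(3)

prob = pulp.LpProblem("robot_assignment", pulp.LpMinimize)
x = {(r, g): pulp.LpVariable(f"x_{r}_{g}", cat="Binary") for r in robots for g in regions}

prob += pulp.lpSum(cost[r][g] * x[r, g] for r in robots for g in regions)  # objective
for r in robots:                       # each robot moves to exactly one region
    prob += pulp.lpSum(x[r, g] for g in regions) == 1
for g in regions:                      # at most one robot per region
    prob += pulp.lpSum(x[r, g] for r in robots) <= 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
assignment = {r: g for (r, g), var in x.items() if var.value() == 1}
print(assignment)                      # e.g. {0: 0, 1: 1, 2: 2}
```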
Named Entity Recognition and Intent Classification are among the most important subfields of Natural Language Processing. Recent research has led to the development of faster, more sophisticated and efficient models to tackle the problems posed by these two tasks. In this work we explore the effectiveness of two separate families of Deep Learning networks for those tasks: Bidirectional Long Short-Term Memory (LSTM) networks and Transformer-based networks. The models were trained and tested on the ATIS benchmark dataset for both the English and Greek languages. The purpose of this paper is to present a comparative study of the two groups of networks for both languages and showcase the results of our experiments. The models, being the current state-of-the-art, yielded impressive results and achieved high performance.
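As a concrete reference point for one of the two model families compared, the sketch below defines a minimal bidirectional LSTM intent classifier in PyTorch; the vocabulary size, dimensions, and number of intent classes are placeholders rather than the paper's ATIS configuration.

```python
# Hedged sketch of a BiLSTM intent classifier (intent classification only;
# the NER/slot-filling head is omitted).
import torch
import torch.nn as nn

class BiLSTMIntentClassifier(nn.Module):
    def __init__(self, vocab_size=5000, emb_dim=100, hidden=128, n_intents=26):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_intents)

    def forward(self, token_ids):                   # token_ids: (batch, seq_len)
        x = self.emb(token_ids)
        _, (h, _) = self.lstm(x)                    # h: (2, batch, hidden)
        pooled = torch.cat([h[0], h[1]], dim=-1)    # concatenate both directions
        return self.head(pooled)                    # intent logits

logits = BiLSTMIntentClassifier()(torch.randint(1, 5000, (4, 20)))
print(logits.shape)                                 # torch.Size([4, 26])
```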
The goal of autonomous vehicles is to navigate public roads safely and comfortably. To enforce safety, traditional planning approaches rely on handcrafted rules to generate trajectories. Machine learning-based systems, on the other hand, scale with data and are able to learn more complex behaviors. However, they often ignore that agents and self-driving vehicle trajectory distributions can be leveraged to improve safety. In this paper, we propose modeling a distribution over multiple future trajectories for both the self-driving vehicle and other road agents, using a unified neural network architecture for prediction and planning. During inference, we select the planning trajectory that minimizes a cost taking into account safety and the predicted probabilities. Our approach does not depend on any rule-based planners for trajectory generation or optimization, improves with more training data and is simple to implement. We extensively evaluate our method through a realistic simulator and show that the predicted trajectory distribution corresponds to different driving profiles. We also successfully deploy it on a self-driving vehicle on urban public roads, confirming that it drives safely without compromising comfort. The code for training and testing our model on a public prediction dataset and the video of the road test are available at https://woven.mobi/safepathnet
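The inference step described above, picking the planned trajectory that minimizes a cost combining safety and the predicted probabilities, can be sketched as follows; the cost terms and weights are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of selecting the minimum-cost ego trajectory from a predicted set.
import numpy as np

def select_trajectory(candidates, probs, safety_cost, comfort_cost,
                      w_prob=1.0, w_safety=10.0, w_comfort=1.0):
    """candidates: (K, T, 2) candidate ego trajectories; probs: (K,) predicted
    probabilities; safety_cost / comfort_cost: (K,) per-candidate costs.
    Returns the index of the trajectory with the lowest total cost."""
    total = (w_safety * safety_cost
             + w_comfort * comfort_cost
             - w_prob * np.log(probs + 1e-9))   # prefer likely, safe, comfortable plans
    return int(np.argmin(total))

K, T = 8, 20
cands = np.random.randn(K, T, 2).cumsum(axis=1)  # stand-in candidate trajectories
p = np.random.dirichlet(np.ones(K))              # stand-in predicted probabilities
idx = select_trajectory(cands, p, np.random.rand(K), np.random.rand(K))
print("selected trajectory:", idx)
```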